Visual human-robot communication in social settings
Authors
Abstract
Supporting human-robot interaction (HRI) in dynamic, multi-party social settings relies on a number of input and output modalities for visual human tracking, language processing, high-level reasoning, robot control, etc. Capturing visual human-centered information is a fundamental input source for effective and successful interaction. The current paper deals with visual processing in dynamic scenes and presents an integrated vision system that combines a number of different cues (such as color, depth, and motion) to track and recognize human actions in challenging environments. The overall system comprises a number of vision modules for human identification and tracking, extraction of pose-related information from the body and face, recognition of a specific set of communicative gestures (e.g. “waving”, “pointing”), as well as tracking of objects towards identification of manipulative gestures that act on objects in the environment (e.g. “grab glass”, “raise bottle”). Experimental results from a bartending scenario, as well as a comparative assessment of a subset of modules, validate the effectiveness of the proposed system.
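To make the modular structure described in the abstract more concrete, the following Python sketch shows one plausible way such a pipeline could be organized: per-person cue modules (color, motion), a simple late-fusion step, and a gesture classifier operating on pose keypoints. All class names, the fusion rule, and the thresholds are illustrative assumptions for exposition only, not the authors' actual implementation.

```python
# Minimal sketch of a modular HRI vision pipeline fusing several visual cues.
# Module names and the fusion logic are hypothetical, not the paper's system.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

import numpy as np


@dataclass
class PersonObservation:
    """Per-frame, per-person evidence gathered from the individual cue modules."""
    track_id: int
    position: np.ndarray                          # 3-D position from the depth cue
    pose_keypoints: Optional[np.ndarray] = None   # body/face pose, if available
    cue_scores: Dict[str, float] = field(default_factory=dict)


class CueModule:
    """Base class for a single visual cue (color, depth, motion, ...)."""
    name = "cue"

    def score(self, frame: np.ndarray, obs: PersonObservation) -> float:
        raise NotImplementedError


class ColorCue(CueModule):
    name = "color"

    def score(self, frame, obs):
        # Placeholder: e.g. skin-color or clothing-appearance likelihood.
        return 1.0


class MotionCue(CueModule):
    name = "motion"

    def score(self, frame, obs):
        # Placeholder: e.g. frame-differencing energy around the tracked person.
        return 1.0


class GestureClassifier:
    """Maps a short window of pose keypoints to a communicative gesture label."""

    def classify(self, keypoint_window: List[np.ndarray]) -> str:
        # Placeholder for a trained classifier ("waving", "pointing", ...).
        return "unknown"


def process_frame(frame: np.ndarray,
                  tracks: List[PersonObservation],
                  cues: List[CueModule],
                  gesture_clf: GestureClassifier) -> List[str]:
    """Fuse cue scores per tracked person and emit one gesture label per track."""
    labels = []
    for obs in tracks:
        for cue in cues:
            obs.cue_scores[cue.name] = cue.score(frame, obs)
        # Simple late fusion: average the cue scores and gate on a threshold.
        combined = float(np.mean(list(obs.cue_scores.values())))
        if combined > 0.5 and obs.pose_keypoints is not None:
            labels.append(gesture_clf.classify([obs.pose_keypoints]))
        else:
            labels.append("no-gesture")
    return labels
```

In a real system each placeholder would be replaced by the corresponding tracking, pose-estimation, and gesture-recognition component, and the late-fusion step would typically use learned weights rather than a plain average.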
Similar articles
Real-Time Vision-Based Learning for Human-Robot Interaction in Social Humanoid Robotics
This report proposes a research topic in the field of human-robot interaction. The research will focus on a real-time learning paradigm and its application in social robotics. Gestures, as part of more complex actions, will provide the information necessary for a social robot to infer the emotional state of a human demonstrator. The research may lead to further investigation of acceptance of humanoid rob...
Towards Autonomous Child-Robot Interaction
The ALIZ-E project [1] aims at designing and developing long-term, adaptive social interaction between robots and child users (8-11 years old) in real-world settings, for which a conversational human-robot interaction system has been developed [2]. In this context we present the auditory and visual perception components that have been specifically built for the purpose of supporting verbal and ...
Simulation of Position Based Visual Control and Performance Tests of 6R Robot
This paper presents simulation and experimental results of a position-based visual servoing control process for a 6R robot using two fixed cameras. This method has the ability to deal with real-time changes in the relative position of the target object with respect to the robot. Also, greater accuracy and independence of the servo control structure from the target pose coordinates are additional advanta...
Cognition-enabled Task Interpretation for Human-Robot Teams in a Simulation-based Search and Rescue Mission
Owing to the complementary capabilities of humans and robots, mixed human-robot teams are increasingly deployed in real-world settings. A favored means of communication is the natural language used by humans, which is still a challenge for robotic teammates. They need to understand the environment from the viewpoint of their human teammates in order to be able to translate the instructions received in the nat...
Journal:
Volume, Issue:
Pages: -
Publication year: 2013